{
"cells": [
{
"cell_type": "markdown",
"id": "5dafe599",
"metadata": {},
"source": [
"# Simple MCP demo\n",
"- Send a query to LLM, says it doesn't know\n",
"- Give it a tool to help, and it knows!<br /> <br />\n",
"\n",
"- Model Context Protocol (MCP) is like a USB standard for LLM/tool integration: it lets you plug 3rd party tools into your AI, letting e.g. a desktop client make Plotly charts or request additional info from Wikipedia. <br /> <br />\n",
"\n",
"- LLMs on their owns are like librarians, you can talk to them, they can understand you at some level and give you information back. With MCP and tools, they gain access to more information, can take actions, potentially including internal network apps and data, and can become more like personal assistants.<br /> <br />\n",
"\n",
"- MCP has 3 components:\n",
" - An MCP client which initiates a conversation and requests to the LLM and MCP server. MCP client is for example a chat client (Claude Desktop is great) or an IDE (Cursor, Windsurf) that is asking an LLM for help with code\n",
" - An MCP server which provides tools for a purpose - A Fetch or Playwright tool to browse the Web, or a Context7 tool to look up Python module documentation in a vector DB\n",
" - An LLM which supports tool use - any major LLM that follows the OpenAI tool use standard <br /> <br />\n",
"\n",
"- MCP Flow:\n",
" - User launches MCP client, it connects to MCP server(s) per its configuration.\n",
" - MCP client asks MCP server to list the tools it offers to the client.\n",
" - MCP client can prompt the LLM, providing a list of available tools (calling signatures, and semantic descriptions of when to call them).\n",
" - LLM responds to prompt. If, based on the prompt and the available tools, a tool would be the best way to answer the question, LLM will respond with a tool call request and the parameter values for the calling signature.\n",
" - MCP client then calls the tools using the provided signature, adds the output from the tool to the conversation, and calls the LLM again with the updated conversation.\n",
" - LLM may respond with further tool call requests, or provide a response.\n",
" - That's mostly it. Besides executing tool calls, servers can also provide static resources for the client like docs, reference prompts, see the [docs on the home page](https://modelcontextprotocol.io/overview) <br /> <br />\n",
"\n",
"- > 10,000 MCP servers available \n",
" - [MCP Market leaderboard (by GitHub stars)](https://mcpmarket.com/leaderboards)\n",
" - [PulseMCP directory (by downloads)](https://www.pulsemcp.com/servers?sort=popular-30-days-desc)\n",
" - [Smithery](https://smithery.ai/)\n",
" - [LobeHub](https://lobehub.com/mcp)\n",
" - [Glama](https://glama.ai/mcp/servers)<br /> <br />\n",
"\n",
"- Without even coding, you can connect a client like Claude Desktop to them via configurations, and you get reasoning models + deep research + actions, which can be a force multiplier for analysts and knowledge workers.<br /> <br />\n",
"\n",
"- In the example below, we make an MCP server in a few lines of code to answer Monty Python's most famous question, and insert some pdb breakpoints so we can step through the flow. <br /> <br />\n",
"\n",
"\n",
"- More info:\n",
" - [Anthropic MCP Announcement](https://www.anthropic.com/news/model-context-protocol)\n",
" - [Anthropic YouTube talk](https://www.youtube.com/watch?v=kQmXtrmQ5Zg)\n",
" - [Model Context Protocol home page on GitHub](https://github.com/modelcontextprotocol)\n",
" - [Composio intro](https://composio.dev/blog/what-is-model-context-protocol-mcp-explained)<br /> <br />\n",
" \n",
" \n",
" "
]
},
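{
"cell_type": "markdown",
"id": "b2f1a0c1",
"metadata": {},
"source": [
"A minimal sketch of the Claude Desktop configuration mentioned above. The `mcpServers` key is the standard config format; the server name and script path here are hypothetical placeholders for this demo's server:\n",
"```json\n",
"{\n",
"  \"mcpServers\": {\n",
"    \"swallow-server\": {\n",
"      \"command\": \"python\",\n",
"      \"args\": [\"/path/to/swallow_server.py\"]\n",
"    }\n",
"  }\n",
"}\n",
"```"
]
},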
{
"cell_type": "markdown",
"id": "34a5aa95",
"metadata": {},
"source": [
"# Example Code"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d71cafca",
"metadata": {
"execution": {
"iopub.execute_input": "2025-12-13T21:04:04.161815Z",
"iopub.status.busy": "2025-12-13T21:04:04.161011Z",
"iopub.status.idle": "2025-12-13T21:04:05.854128Z",
"shell.execute_reply": "2025-12-13T21:04:05.853672Z",
"shell.execute_reply.started": "2025-12-13T21:04:04.161785Z"
}
},
"outputs": [],
"source": [
"import sys\n",
"import os\n",
"import dotenv\n",
"import re\n",
"from datetime import datetime, timedelta\n",
"import time\n",
"from typing import Dict, Any, Optional, Annotated\n",
"from urllib.parse import urljoin, urlparse\n",
"\n",
"import asyncio\n",
"import nest_asyncio\n",
"\n",
"from contextlib import AsyncExitStack\n",
"import mcp\n",
"from mcp.client.stdio import stdio_client\n",
"from mcp import ClientSession, StdioServerParameters\n",
"\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain_anthropic import ChatAnthropic\n",
"from langchain_core.messages import SystemMessage, HumanMessage\n",
"\n",
"import anthropic\n",
"from anthropic import Anthropic\n",
"import pdb\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "174dc59a",
"metadata": {
"execution": {
"iopub.execute_input": "2025-12-13T21:04:06.956662Z",
"iopub.status.busy": "2025-12-13T21:04:06.956405Z",
"iopub.status.idle": "2025-12-13T21:04:06.966708Z",
"shell.execute_reply": "2025-12-13T21:04:06.965965Z",
"shell.execute_reply.started": "2025-12-13T21:04:06.956643Z"
}
},
"outputs": [],
"source": [
"# load secrets from .env including API keys\n",
"dotenv.load_dotenv()\n",
"\n",
"# enable asyncio in jupyter notebook\n",
"nest_asyncio.apply()\n",
"\n",
"# Initialize plotly for Jupyter\n",
"# init_notebook_mode(connected=True)\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "de8a6078",
"metadata": {
"execution": {
"iopub.execute_input": "2025-12-13T21:04:10.142860Z",
"iopub.status.busy": "2025-12-13T21:04:10.142418Z",
"iopub.status.idle": "2025-12-13T21:04:27.667584Z",
"shell.execute_reply": "2025-12-13T21:04:27.665982Z",
"shell.execute_reply.started": "2025-12-13T21:04:10.142827Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Available Claude models:\n",
"claude-opus-4-20250514\n",
"claude-sonnet-4-20250514\n",
"claude-3-5-haiku-20241022\n",
"\n",
"✓ claude-opus-4-20250514\n",
"The airspeed velocity of an unladen swallow depends on whether you mean an African or European swallow!\n",
"\n",
"- **European swallow**: approximately 20.1 miles per hour (32.4 km/h)\n",
"- **African swallow**: approximately 24 miles per hour (38.6 km/h)\n",
"\n",
"This is, of course, a reference to the famous scene from \"Monty Python and the Holy Grail\" where the Bridge Keeper asks this seemingly absurd question. The joke is that it's an overly specific question that catches people off guard - and in the film, when King Arthur cleverly responds \"What do you mean? An African or European swallow?\", the Bridge Keeper doesn't know the answer and gets thrown into the gorge.\n",
"\n",
"While these are actual rough estimates based on real studies of swallow flight speeds, the Monty Python scene has made this question far more famous as a\n",
"\n",
"✓ claude-sonnet-4-20250514\n",
"Ah, a classic Monty Python reference! \n",
"\n",
"The proper response is: \"What do you mean? An African or European swallow?\"\n",
"\n",
"But if you want actual numbers:\n",
"- **European swallow** (barn swallow): cruising speed around 11 meters per second (24 mph)\n",
"- **African swallow** (red-rumped swallow): similar speed, roughly 10-11 meters per second\n",
"\n",
"Of course, in the Holy Grail, this question was the bridge keeper's trick - you needed to ask for clarification to avoid being cast into the gorge of eternal peril!\n",
"\n",
"✓ claude-3-5-haiku-20241022\n",
"Ah, a classic reference to the 1975 comedy film \"Monty Python and the Holy Grail\"! In the movie, King Arthur is asked this question by a bridge keeper, and when he can't answer, he's challenged to answer before being cast into the Gorge of Eternal Peril.\n",
"\n",
"The joke is that it's an absurdly specific question with no clear answer. However, some fans have actually tried to scientifically answer this:\n",
"\n",
"- African or European swallow? (Another line from the movie)\n",
"- If European, the estimated airspeed velocity is about 11 meters per second, or 24 miles per hour.\n",
"\n",
"But really, the humor is in the sheer randomness and impossibility of precisely knowing such a specific detail about a bird's flight speed.\n",
"\n"
]
}
],
"source": [
"# test anthropic\n",
"client = anthropic.Anthropic()\n",
"\n",
"# https://docs.anthropic.com/en/docs/about-claude/models/overview\n",
"anthropic_models = [\n",
" \"claude-opus-4-20250514\",\n",
" \"claude-sonnet-4-20250514\",\n",
" \"claude-3-5-haiku-20241022\",\n",
"]\n",
"\n",
"print(\"Available Claude models:\")\n",
"print(\"\\n\".join(anthropic_models))\n",
"print()\n",
"\n",
"# Try making a simple completion request to each:\n",
"\n",
"message = \"what is the airspeed velocity of an unladen swallow\"\n",
"for model in anthropic_models:\n",
" try:\n",
" response = client.messages.create(\n",
" model=model,\n",
" max_tokens=200,\n",
" messages=[{\"role\": \"user\", \"content\": message}]\n",
" )\n",
" print(f\"✓ {model}\")\n",
" print(response.content[0].text)\n",
" print()\n",
" except Exception as e:\n",
" print(f\"✗ {model} - error: {str(e)}\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e4eb09f9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Available OpenAI models:\n",
"gpt-4o\n",
"gpt-4o-mini\n",
"gpt-4.1\n",
"gpt-4.1-mini\n",
"o3\n",
"\n",
"✓ gpt-4o\n",
"The question \"What is the airspeed velocity of an unladen swallow?\" is a humorous reference from the movie *Monty Python and the Holy Grail*. In the film, it is posed as an absurdly specific and esoteric question.\n",
"\n",
"For a more straightforward answer, we can refer to actual ornithological data: the airspeed velocity of an unladen European swallow (Hirundo rustica) is estimated to be around 11 meters per second, or approximately 24 miles per hour. However, this is a rough estimate, as the actual speed can vary based on factors like wind conditions and the specific activity the swallow is engaged in.\n",
"\n",
"✓ gpt-4o-mini\n",
"The question about the airspeed velocity of an unladen swallow is a humorous reference to the film \"Monty Python and the Holy Grail.\" In a more scientific context, the airspeed velocity of an unladen European swallow (Hirundo rustica) is estimated to be around 11 meters per second, or approximately 24 miles per hour. However, this figure can vary depending on factors such as wind conditions and the specific species of swallow in question.\n",
"\n",
"✓ gpt-4.1\n",
"Ah, a classic question!\n",
"\n",
"If you’re referencing **Monty Python and the Holy Grail**, the *true* answer is, of course:\n",
"> \"What do you mean? An African or European swallow?\"\n",
"\n",
"But for the sake of science, let’s answer:\n",
"\n",
"### European Swallow (*Hirundo rustica*)\n",
"Ornithologists estimate the airspeed velocity of an unladen European Swallow is:\n",
"**About 11 meters per second** (roughly **24 miles per hour** or **39 km/h**) when cruising.\n",
"\n",
"### African Swallow\n",
"There are several species, but they generally have similar flight speeds, though precise values are less documented.\n",
"\n",
"---\n",
"**In summary:** \n",
"> The airspeed velocity of an unladen European Swallow is about **11 m/s** (24 mph).\n",
"\n",
"And now you may safely cross the Bridge of Death!\n",
"\n",
"✓ gpt-4.1-mini\n",
"Ah, the classic question! If you're referring to the line from *Monty Python and the Holy Grail*, the punchline is all about distinguishing between an African and a European swallow.\n",
"\n",
"But to give you a more scientific answer:\n",
"\n",
"- The airspeed velocity of an unladen **European Swallow** (*Hirundo rustica*) is roughly **20 to 25 miles per hour** (about **32 to 40 kilometers per hour**).\n",
"\n",
"This estimate varies depending on factors like wind conditions and exact bird size.\n",
"\n",
"If you want specifics on the African swallow, that’s a bit less commonly documented, but it’s generally assumed to be somewhat similar.\n",
"\n",
"So: \n",
"**What do you mean? An African or a European swallow?** 😄\n",
"\n",
"✓ o3\n",
"“An African or a European swallow?” \n",
"(Monty Python taught us you have to ask!)\n",
"\n",
"If we leave the movie script and look at real‐world numbers:\n",
"\n",
"• European (Barn) Swallow, Hirundo rustica \n",
" – Typical level-flight speed: ≈ 10 m/s (22–24 mph, 35–40 km/h) \n",
" – Short bursts and dives can be faster, but sustained cruising sits around this figure.\n",
"\n",
"• Several African swallow species (e.g., White-throated Swallow, Hirundo albigularis) fly at broadly similar cruising speeds—on the order of 9–11 m/s—because their size, wing loading, and foraging style are comparable.\n",
"\n",
"So the oft-quoted “about 11 m/s (24 mph)” is a reasonable ballpark for an unladen swallow of either variety—at least until it starts carrying coconuts.\n",
"\n"
]
}
],
"source": [
"# test openai\n",
"from openai import OpenAI\n",
"\n",
"client = OpenAI()\n",
"# https://platform.openai.com/docs/models\n",
"openai_models = [\n",
" \"gpt-4o\",\n",
" \"gpt-4o-mini\",\n",
" \"gpt-4.1\",\n",
" \"gpt-4.1-mini\",\n",
"# \"o3\"\n",
"]\n",
"print(\"Available OpenAI models:\")\n",
"print(\"\\n\".join(openai_models))\n",
"print()\n",
"\n",
"# Try making a simple completion request to each:\n",
"message = \"what is the airspeed velocity of an unladen swallow\"\n",
"for model in openai_models:\n",
" try:\n",
" response = client.chat.completions.create(\n",
" model=model,\n",
" messages=[{\"role\": \"user\", \"content\": message}]\n",
" )\n",
" print(f\"✓ {model}\")\n",
" print(response.choices[0].message.content)\n",
" print()\n",
" except Exception as e:\n",
" print(f\"✗ {model} - error: {str(e)}\")\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "ba5067a4",
"metadata": {},
"source": [
"see swallow_server.py\n",
"```\n",
"\"\"\"\n",
"This module swallow_server.py implements a simple MCP server using FastMCP,\n",
"providing a tool unladen_swallow_airspeed, returns a string based on input swallow type\n",
"\"\"\"\n",
"from mcp.server.fastmcp import FastMCP\n",
"from pydantic import Field, BaseModel\n",
"\n",
"# return schema\n",
"class SwallowSpeed(BaseModel):\n",
" speed: str\n",
" unit: str\n",
" swallow_type: str\n",
"\n",
"mcp = FastMCP(\"swallow-server\")\n",
"\n",
"@mcp.tool()\n",
"def unladen_swallow_airspeed(\n",
" swallow_type: str = Field(description=\"Type of swallow: 'african' or 'european'\")\n",
") -> SwallowSpeed:\n",
" \"\"\"Provides the airspeed velocity of an unladen swallow.\"\"\"\n",
" stype = swallow_type.strip().lower()\n",
" if stype == 'african':\n",
" return SwallowSpeed(speed=\"31.1415926\", unit=\"km/h\", swallow_type=\"african\")\n",
" elif stype == 'european':\n",
" return SwallowSpeed(speed=\"27.1828\", unit=\"km/h\", swallow_type=\"european\")\n",
" else:\n",
" return SwallowSpeed(speed=\"I don't know!\", unit=\"\", swallow_type=stype)\n",
"\n",
"\n",
"def main():\n",
" mcp.run()\n",
"\n",
"\n",
"if __name__ == \"__main__\":\n",
" main()\n",
"```"
]
},
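{
"cell_type": "markdown",
"id": "b2f1a0c2",
"metadata": {},
"source": [
"For reference, a call with `swallow_type=\"african\"` yields a structured result shaped by the `SwallowSpeed` model above; serialized to JSON it would look roughly like this (illustrative, not captured output):\n",
"```json\n",
"{\"speed\": \"31.1415926\", \"unit\": \"km/h\", \"swallow_type\": \"african\"}\n",
"```"
]
},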
{
"cell_type": "markdown",
"id": "711acff3",
"metadata": {},
"source": [
"## Test swallow_server.py using MCP Inspector\n",
"- `$ mcp dev swallow_server.py`\n",
"- click 'connect'\n",
"- click 'tools'\n",
"- click 'unladen_swallow_airspeed' tool\n",
"- enter parameters"
]
},
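{
"cell_type": "markdown",
"id": "b2f1a0c3",
"metadata": {},
"source": [
"The `mcp dev` command is provided by the MCP Python SDK's CLI extra; assuming it isn't already installed, something like this should set it up:\n",
"```\n",
"$ pip install \"mcp[cli]\"\n",
"```"
]
},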
{
"cell_type": "code",
"execution_count": 4,
"id": "a706529f",
"metadata": {
"execution": {
"iopub.execute_input": "2025-12-13T21:04:34.020230Z",
"iopub.status.busy": "2025-12-13T21:04:34.019887Z",
"iopub.status.idle": "2025-12-13T21:04:34.066363Z",
"shell.execute_reply": "2025-12-13T21:04:34.049751Z",
"shell.execute_reply.started": "2025-12-13T21:04:34.020206Z"
}
},
"outputs": [],
"source": [
"MODEL = \"claude-sonnet-4-20250514\"\n",
"\n",
"class MCPClient:\n",
" \"\"\"An MCP client adapted to run in a Jupyter notebook.\n",
" \"\"\"\n",
" def __init__(self, model=MODEL):\n",
" self.session: Optional[ClientSession] = None\n",
" self.exit_stack = AsyncExitStack()\n",
" self.model=model\n",
" if model in anthropic_models:\n",
" self.vendor ='Anthropic'\n",
" self.llm = Anthropic()\n",
" elif model in openai_models:\n",
" self.vendor = 'OpenAI'\n",
" self.llm = OpenAI()\n",
" else:\n",
" print(f\"bad model {model}, try again\")\n",
" self.tools = {}\n",
" self.tools_reverse = {}\n",
"\n",
" def connect_to_server(self, server_script_path: str):\n",
" \"\"\"Connect to an MCP server and list its tools.\"\"\"\n",
" print(f\"Connecting to server: {server_script_path}...\")\n",
" is_python = server_script_path.endswith('.py')\n",
" if not is_python:\n",
" raise ValueError(\"Server script must be a .py file\")\n",
"\n",
" server_params = StdioServerParameters(\n",
" command=sys.executable, # Use the same python executable\n",
" args=[server_script_path],\n",
" env=None\n",
" )\n",
"\n",
" pdb.set_trace()\n",
" # print(server_params)\n",
" response = asyncio.run(self.async_connect_to_server(server_params))\n",
" # print(response)\n",
" self.tools[server_script_path] = response.tools\n",
" reverse_tool_dict = {tool.name: server_script_path for tool in response.tools}\n",
" self.tools_reverse = {**self.tools_reverse, **reverse_tool_dict}\n",
" print(\"\\nConnection successful!\")\n",
" print(\"Available tools:\", [(tool.name, tool.description, tool.inputSchema) for tool in self.tools[server_script_path]])\n",
"\n",
" async def async_connect_to_server(self, server_params):\n",
"\n",
" stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))\n",
" self.stdio, self.write = stdio_transport\n",
" self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))\n",
"\n",
" await self.session.initialize()\n",
" response = await self.session.list_tools()\n",
" return response\n",
"\n",
" def process_query(self, query: str) -> str:\n",
" # TODO: implement process_query_openai, call based on self.vendor\n",
" if self.vendor == \"Anthropic\":\n",
" return self.process_query_anthropic(query)\n",
" elif self.vendor == \"OpenAI\":\n",
" return self.process_query_openai(query)\n",
" \n",
" else:\n",
" return (f\"not implemented\")\n",
"\n",
"\n",
" def process_query_anthropic(self, query: str) -> str:\n",
" \"\"\"Process a query using LLM and the available tools.\"\"\"\n",
" \n",
" if not self.session:\n",
" return \"Error: Not connected to a server. Please run connect_to_server first.\"\n",
"\n",
" pdb.set_trace()\n",
" messages = [{\"role\": \"user\", \"content\": query}]\n",
" available_tools = [{\n",
" \"name\": tool.name,\n",
" \"description\": tool.description,\n",
" \"input_schema\": tool.inputSchema\n",
" } for server in self.tools.values() for tool in server]\n",
" print(f\"Sending query to {self.model}...\")\n",
" response = self.llm.messages.create(\n",
" model=self.model, \n",
" max_tokens=1024,\n",
" messages=messages,\n",
" tools=available_tools\n",
" )\n",
"\n",
" final_text = []\n",
" for content in response.content:\n",
" if content.type == 'text':\n",
" final_text.append(content.text)\n",
" elif content.type == 'tool_use':\n",
" tool_name = content.name\n",
" tool_args = content.input\n",
" print(f\"{self.model} requested to use tool: {tool_name} with arguments: {tool_args}\")\n",
"\n",
" result = asyncio.run(self.session.call_tool(tool_name, tool_args))\n",
" print(f\"Received result from MCP tool: {result.content} \")\n",
"\n",
" # Create the tool result content block\n",
" tool_result_content = {\n",
" \"type\": \"tool_result\",\n",
" \"tool_use_id\": content.id,\n",
" \"content\": str(result.content) # Ensure content is a string\n",
" }\n",
"\n",
" # Append the original assistant message and the tool result\n",
" messages.append({\"role\": \"assistant\", \"content\": response.content})\n",
" messages.append({\"role\": \"user\", \"content\": [tool_result_content]})\n",
"\n",
" # Get next response from LLM\n",
" print(f\"Tool result: {tool_result_content}\")\n",
" print(f\"Sending tool result back to {self.model}...\")\n",
" follow_up_response = self.llm.messages.create(\n",
" model=self.model,\n",
" max_tokens=1024,\n",
" messages=messages,\n",
" )\n",
" for follow_up_content in follow_up_response.content:\n",
" if follow_up_content.type == 'text':\n",
" final_text.append(follow_up_content.text) \n",
"\n",
" return \"\\n\".join(final_text)\n",
"\n",
"\n",
" def process_query_openai(self, query: str) -> str:\n",
" \"\"\"Process a query using LLM and the available tools.\"\"\"\n",
" \n",
" if not self.session:\n",
" return \"Error: Not connected to a server. Please run connect_to_server first.\"\n",
"\n",
" pdb.set_trace()\n",
" messages = [{\"role\": \"user\", \"content\": query}]\n",
" available_tools = [\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": tool.name,\n",
" \"description\": tool.description,\n",
" \"parameters\": tool.inputSchema\n",
" }\n",
" }\n",
" for server in self.tools.values() for tool in server]\n",
" print(f\"Sending query to {self.model}...\")\n",
" response = self.llm.chat.completions.create(\n",
" model=self.model,\n",
" messages=messages,\n",
" tools=available_tools,\n",
" tool_choice=\"auto\", # auto = let the model decide whether to call a tool\n",
" )\n",
" response_message = response.choices[0].message\n",
"\n",
" # Check if the model decided to call a tool\n",
" if response_message.tool_calls:\n",
" tool_call = response_message.tool_calls[0]\n",
" tool_name = tool_call.function.name\n",
" tool_args = json.loads(tool_call.function.arguments)\n",
" print(f\"{self.model} requested to use tool: {tool_name} with arguments: {tool_args}\")\n",
" result = asyncio.run(self.session.call_tool(tool_name, tool_args))\n",
" print(f\"Received result from MCP tool: {result.content} \")\n",
"\n",
" # Append the original assistant message and the tool result\n",
" messages.append({\n",
" \"role\": \"assistant\",\n",
" \"content\": response_message.content, # This might be None for tool calls\n",
" \"tool_calls\": [\n",
" {\n",
" \"id\": tool_call.id,\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": tool_call.function.name,\n",
" \"arguments\": tool_call.function.arguments\n",
" }\n",
" }\n",
" ]\n",
" })\n",
" # Add tool result\n",
" messages.append({\n",
" \"tool_call_id\": tool_call.id,\n",
" \"role\": \"tool\",\n",
" \"name\": tool_name,\n",
" \"content\": json.dumps(result.structuredContent)\n",
" })\n",
" # Send the messages with the tool's output back to the model\n",
" second_response = self.llm.chat.completions.create(\n",
" model=self.model,\n",
" messages=messages,\n",
" )\n",
" second_response_message = second_response.choices[0].message\n",
" return second_response_message.content\n",
" else:\n",
" return response_message.content\n",
"\n",
" return \"\\n\".join(final_text)\n",
"\n",
" def chat_loop(self):\n",
" \"\"\"Run an interactive chat loop\"\"\"\n",
" print(\"\\nMCP Client Started!\")\n",
" print(\"Type your queries or 'quit' to exit.\")\n",
"\n",
" while True:\n",
" try:\n",
" query = input(\"\\nQuery: \").strip()\n",
"\n",
" if query.lower() == 'quit':\n",
" break\n",
"\n",
" response = self.process_query(query)\n",
" print(\"\\n\" + response)\n",
"\n",
" except Exception as e:\n",
" print(f\"\\nError: {str(e)}\")\n",
"\n",
" def cleanup(self):\n",
" \"\"\"Clean up resources and close the server connection.\"\"\"\n",
" print(\"Cleaning up resources...\")\n",
" asyncio.run(self.exit_stack.aclose())\n",
" print(\"Cleanup complete.\")\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a77514d4",
"metadata": {
"execution": {
"iopub.execute_input": "2025-12-13T21:04:36.216619Z",
"iopub.status.busy": "2025-12-13T21:04:36.216324Z",
"iopub.status.idle": "2025-12-13T21:04:46.723761Z",
"shell.execute_reply": "2025-12-13T21:04:46.723393Z",
"shell.execute_reply.started": "2025-12-13T21:04:36.216599Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connecting to server: swallow_server.py...\n",
"> \u001b[32m/var/folders/6d/3xz907yn5ylg43s2vlnnzptr0000gn/T/ipykernel_8171/148448282.py\u001b[39m(\u001b[92m36\u001b[39m)\u001b[36mconnect_to_server\u001b[39m\u001b[34m()\u001b[39m\n",
"\u001b[32m 34\u001b[39m pdb.set_trace()\n",
"\u001b[32m 35\u001b[39m \u001b[38;5;66;03m# print(server_params)\u001b[39;00m\n",
"\u001b[32m---> 36\u001b[39m response = asyncio.run(self.async_connect_to_server(server_params))\n",
"\u001b[32m 37\u001b[39m \u001b[38;5;66;03m# print(response)\u001b[39;00m\n",
"\u001b[32m 38\u001b[39m self.tools[server_script_path] = response.tools\n",
"\n"
]
},
{
"name": "stdin",
"output_type": "stream",
"text": [
"ipdb> c\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Connection successful!\n",
"Available tools: [('unladen_swallow_airspeed', 'Provides the airspeed velocity of an unladen swallow.', {'properties': {'swallow_type': {'description': \"Type of swallow: 'african' or 'european'\", 'title': 'Swallow Type', 'type': 'string'}}, 'required': ['swallow_type'], 'title': 'unladen_swallow_airspeedArguments', 'type': 'object'})]\n"
]
}
],
"source": [
"def connect():\n",
" client = MCPClient(MODEL)\n",
" client.connect_to_server('swallow_server.py')\n",
" return client\n",
"\n",
"# Run the connection and keep the client object\n",
"# This will block until the connection is established.\n",
"client = connect()\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "ed97ddd5",
"metadata": {
"execution": {
"iopub.execute_input": "2025-12-13T21:04:53.396025Z",
"iopub.status.busy": "2025-12-13T21:04:53.395867Z",
"iopub.status.idle": "2025-12-13T21:04:53.398434Z",
"shell.execute_reply": "2025-12-13T21:04:53.397965Z",
"shell.execute_reply.started": "2025-12-13T21:04:53.396013Z"
}
},
"outputs": [],
"source": [
"def run_query(query):\n",
" response = client.process_query(query)\n",
" print(\"\\n--- LLM's Response ---\")\n",
" print(response)\n",
" print(\"-------------------------\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a32d4403",
"metadata": {
"execution": {
"execution_failed": "2025-12-13T21:06:35.400Z"
}
},
"outputs": [],
"source": [
"# Run a query\n",
"query = \"What is the airspeed velocity of an unladen african swallow?\"\n",
"run_query(query)\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "dc502dfe",
"metadata": {
"execution": {
"iopub.execute_input": "2025-12-13T21:05:05.149131Z",
"iopub.status.busy": "2025-12-13T21:05:05.148860Z",
"iopub.status.idle": "2025-12-13T21:05:59.150794Z",
"shell.execute_reply": "2025-12-13T21:05:59.150084Z",
"shell.execute_reply.started": "2025-12-13T21:05:05.149106Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"MCP Client Started!\n",
"Type your queries or 'quit' to exit.\n"
]
},
{
"name": "stdin",
"output_type": "stream",
"text": [
"\n",
"Query: c\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Task was destroyed but it is pending!\n",
"task: <Task pending name='Task-23' coro=<_async_in_context.<locals>.run_in_context() done, defined at /opt/anaconda3/envs/mcp/lib/python3.11/site-packages/ipykernel/utils.py:57> wait_for=<Task pending name='Task-24' coro=<Kernel.shell_main() running at /opt/anaconda3/envs/mcp/lib/python3.11/site-packages/ipykernel/kernelbase.py:590> cb=[Task.__wakeup()]> cb=[ZMQStream._run_callback.<locals>._log_error() at /opt/anaconda3/envs/mcp/lib/python3.11/site-packages/zmq/eventloop/zmqstream.py:563]>\n",
"/opt/anaconda3/envs/mcp/lib/python3.11/tokenize.py:622: RuntimeWarning: coroutine 'Kernel.shell_main' was never awaited\n",
" return _tokenize(readline, None)\n",
"RuntimeWarning: Enable tracemalloc to get the object allocation traceback\n",
"Task was destroyed but it is pending!\n",
"task: <Task pending name='Task-24' coro=<Kernel.shell_main() running at /opt/anaconda3/envs/mcp/lib/python3.11/site-packages/ipykernel/kernelbase.py:590> cb=[Task.__wakeup()]>\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"> \u001b[32m/var/folders/6d/3xz907yn5ylg43s2vlnnzptr0000gn/T/ipykernel_8171/148448282.py\u001b[39m(\u001b[92m72\u001b[39m)\u001b[36mprocess_query_anthropic\u001b[39m\u001b[34m()\u001b[39m\n",
"\u001b[32m 70\u001b[39m \n",
"\u001b[32m 71\u001b[39m pdb.set_trace()\n",
"\u001b[32m---> 72\u001b[39m messages = [{\u001b[33m\"role\"\u001b[39m: \u001b[33m\"user\"\u001b[39m, \u001b[33m\"content\"\u001b[39m: query}]\n",
"\u001b[32m 73\u001b[39m available_tools = [{\n",
"\u001b[32m 74\u001b[39m \u001b[33m\"name\"\u001b[39m: tool.name,\n",
"\n"
]
},
{
"name": "stdin",
"output_type": "stream",
"text": [
"ipdb> c\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sending query to claude-sonnet-4-20250514...\n",
"\n",
"I see you've just entered \"c\". Could you please provide more details about what you'd like to know or do? I have access to information about the airspeed velocity of unladen swallows (both African and European varieties), but I'm not sure what specific information you're looking for.\n",
"\n",
"How can I help you today?\n"
]
},
{
"name": "stdin",
"output_type": "stream",
"text": [
"\n",
"Query: what is up doc\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"> \u001b[32m/var/folders/6d/3xz907yn5ylg43s2vlnnzptr0000gn/T/ipykernel_8171/148448282.py\u001b[39m(\u001b[92m72\u001b[39m)\u001b[36mprocess_query_anthropic\u001b[39m\u001b[34m()\u001b[39m\n",
"\u001b[32m 70\u001b[39m \n",
"\u001b[32m 71\u001b[39m pdb.set_trace()\n",
"\u001b[32m---> 72\u001b[39m messages = [{\u001b[33m\"role\"\u001b[39m: \u001b[33m\"user\"\u001b[39m, \u001b[33m\"content\"\u001b[39m: query}]\n",
"\u001b[32m 73\u001b[39m available_tools = [{\n",
"\u001b[32m 74\u001b[39m \u001b[33m\"name\"\u001b[39m: tool.name,\n",
"\n"
]
},
{
"name": "stdin",
"output_type": "stream",
"text": [
"ipdb> c\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sending query to claude-sonnet-4-20250514...\n",
"\n",
"Hello! Not much, just here and ready to help you with whatever you need. Is there anything specific you'd like to know or discuss? I have access to some specialized knowledge about unladen swallow airspeeds if you're curious about that particular topic, but I'm happy to chat about whatever's on your mind!\n"
]
},
{
"name": "stdin",
"output_type": "stream",
"text": [
"\n",
"Query: q\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"> \u001b[32m/var/folders/6d/3xz907yn5ylg43s2vlnnzptr0000gn/T/ipykernel_8171/148448282.py\u001b[39m(\u001b[92m72\u001b[39m)\u001b[36mprocess_query_anthropic\u001b[39m\u001b[34m()\u001b[39m\n",
"\u001b[32m 70\u001b[39m \n",
"\u001b[32m 71\u001b[39m pdb.set_trace()\n",
"\u001b[32m---> 72\u001b[39m messages = [{\u001b[33m\"role\"\u001b[39m: \u001b[33m\"user\"\u001b[39m, \u001b[33m\"content\"\u001b[39m: query}]\n",
"\u001b[32m 73\u001b[39m available_tools = [{\n",
"\u001b[32m 74\u001b[39m \u001b[33m\"name\"\u001b[39m: tool.name,\n",
"\n"
]
},
{
"name": "stdin",
"output_type": "stream",
"text": [
"ipdb> quit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Error: \n"
]
},
{
"name": "stdin",
"output_type": "stream",
"text": [
"\n",
"Query: quit\n"
]
}
],
"source": [
"# Run a chat loop\n",
"client.chat_loop()\n"
]
},
{
"cell_type": "markdown",
"id": "4217df77",
"metadata": {},
"source": [
"# Claude as MCP Client\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"id": "346be417",
"metadata": {},
"source": [
"### Make an equity research report using tools\n",
"\n",
"- Write custom Python tools in [server.py](https://github.com/druce/MCP/blob/master/server.py)\n",
"- Configure this custom MCP server and others for the Claude Desktop client using [claude_desktop_config.json](https://github.com/druce/MCP/blob/master/claude_desktop_config.json) . Some tools may needs command line configs, API secrets. \n",
"- We can first send some Claude prompts to ensure certain information is in the context, then call a Deep Research prompt that uses the information in the context, retrieved using tools and deep research, to write a report according to a complex structure in the prompt.\n",
"- [Example series of prompts](https://claude.ai/share/9a679e68-469b-48b2-895a-628d933b64d9)\n",
"- [Example report](https://claude.ai/public/artifacts/736116c9-cde1-4d47-aca8-be78abfde6ab)\n",
"\n",
"\n",
"- This notebook in GitHub\n",
" - [https://github.com/druce/MCP/blob/master/Simple%20MCP%20demo.ipynb](https://github.com/druce/MCP/blob/master/Simple%20MCP%20demo.ipynb)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "b53b1ee7",
"metadata": {
"execution": {
"iopub.execute_input": "2025-12-13T21:06:43.294387Z",
"iopub.status.busy": "2025-12-13T21:06:43.293528Z",
"iopub.status.idle": "2025-12-13T21:06:45.533683Z",
"shell.execute_reply": "2025-12-13T21:06:45.533296Z",
"shell.execute_reply.started": "2025-12-13T21:06:43.294338Z"
}
},
"outputs": [],
"source": [
"import openbb\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "mcp",
"language": "python",
"name": "mcp"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.14"
}
},
"nbformat": 4,
"nbformat_minor": 5
}